Large-Scale Multi-Robot Coverage Path Planning on Grids with Path Deconfliction
Tang, Jingtao, Mao, Zining, Ma, Hang
Abstract--We study Multi-Robot Coverage Path Planning (MCPP) on a 4-neighbor 2D grid G, which aims to compute paths for multiple robots to cover all cells of G. Traditional approaches are limited as they first compute coverage trees on a quadrant coarsened grid H and then employ the Spanning Tree Coverage (STC) paradigm to generate paths on G, making them inapplicable to grids with partially obstructed 2 2 blocks. To address this limitation, we reformulate the problem directly on G, revolutionizing grid-based MCPP solving and establishing new NP-hardness results. We introduce Extended-STC (ESTC), a novel paradigm that extends STC to ensure complete coverage with bounded suboptimality, even when H includes partially obstructed blocks. These methods then apply the Spanning Tree Coverage (STC) [17] paradigm to generate coverage I. Coverage Path Planning (CPP) addresses the problem of determining However, operating exclusively on the coarsened grid H has a path that fully covers a designated workspace [1]. First, it fails in cases where H is This problem is essential for a broad spectrum of robotic incomplete--that is, when any 2 2 blocks contain obstructed applications, from indoor tasks like vacuum cleaning [2] and grid cells absent from G. Second, even optimal coverage trees inspection [3] to outdoor activities such as automated harvesting on H do not necessarily result in an optimal MCPP solution (as [4], planetary exploration [5], and environmental monitoring illustrated in Figure 1-(b) and (c)), as evidenced by an asymptotic [6]. 
Multi-Robot Coverage Path Planning (MCPP) is an suboptimality ratio of four for makespan minimization [14], extension of CPP tailored for multi-robot systems, aiming to since the paths derived from circumnavigating coverage trees coordinate the paths of multiple robots to collectively cover the of H constitute only a subset of all possible sets of coverage given workspace, thereby enhancing both task efficiency [7] The authors are with the School of Computing Science, Simon to discuss the structure and topology of G more precisely, especially in the Fraser University, Burnaby, BC V5A1S6, Canada. The robots require a cost of 1 to traverse between adjacent vertices of G. (a) Single-robot coverage path LS-MCPP but also those generated by existing MCPP methods, to effectively resolve conflicts between robots We revolutionize solving MCPP on grid graphs, overcoming and accounts for turning costs, further enhancing the the above limitations through a two-phase approach that first practicability of the solutions. Our algorithmic contribution are detailed in real-world robotics applications.
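The STC paradigm the abstract builds on can be sketched minimally: coarsen the grid G into 2x2 blocks to obtain H, build a spanning tree over the blocks, and circumnavigate that tree to cover every cell exactly once. The sketch below (function names and the BFS choice are illustrative, not the authors' implementation) shows the coarsening and spanning-tree steps; it assumes every block is fully free, which is exactly the assumption ESTC lifts.

```python
from collections import deque

def coarsen(free_cells):
    """Map each free cell (r, c) of G to its 2x2 block in H.
    Classic STC assumes every block is fully free; ESTC lifts this."""
    blocks = {}
    for (r, c) in free_cells:
        blocks.setdefault((r // 2, c // 2), set()).add((r, c))
    return blocks

def spanning_tree(blocks):
    """BFS spanning tree over the 4-connected block graph H."""
    root = next(iter(blocks))
    seen, edges, frontier = {root}, [], deque([root])
    while frontier:
        b = frontier.popleft()
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nb = (b[0] + dr, b[1] + dc)
            if nb in blocks and nb not in seen:
                seen.add(nb)
                edges.append((b, nb))
                frontier.append(nb)
    return edges

# A fully free 4x4 grid yields four 2x2 blocks. A spanning tree over
# B blocks has B - 1 edges, and circumnavigating it gives a closed
# coverage path visiting each of the 4B cells exactly once.
free = {(r, c) for r in range(4) for c in range(4)}
B = coarsen(free)
T = spanning_tree(B)
```

The circumnavigation step itself (walking around the tree keeping tree edges on one side) is omitted here for brevity; the key structural facts are that the tree has B - 1 edges and the induced coverage path has length 4B.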
A Study of Vulnerability Repair in JavaScript Programs with Large Language Models
Le, Tan Khang, Alimadadi, Saba, Ko, Steven Y.
In recent years, JavaScript has become the most widely used programming language, especially in web development. However, writing secure JavaScript code is not trivial, and programmers often make mistakes that lead to security vulnerabilities in web applications. Large Language Models (LLMs) have demonstrated substantial advancements across multiple domains, and their evolving capabilities suggest potential for automatic code generation from a required specification, including automatic bug fixing. In this study, we explore the accuracy of LLMs, namely ChatGPT and Bard, in finding and fixing security vulnerabilities in JavaScript programs. We also investigate how the amount of context in a prompt affects an LLM's ability to produce a correct patch for vulnerable JavaScript code. Our experiments on real-world software vulnerabilities show that while LLMs are promising for automatic program repair of JavaScript code, achieving a correct bug fix often requires an appropriate amount of context in the prompt.
Online Probabilistic Model Identification using Adaptive Recursive MCMC
Agand, Pedram, Chen, Mo, Taghirad, Hamid D.
Although the Bayesian paradigm offers a formal framework for estimating the entire probability distribution over uncertain parameters, its online implementation can be challenging due to high computational costs. We propose the Adaptive Recursive Markov Chain Monte Carlo (ARMCMC) method, which computes the entire probability density function of the model parameters while eliminating the shortcomings of conventional online techniques. These shortcomings include the restriction to Gaussian noise, applicability only to linear-in-the-parameters (LIP) systems, and persistent excitation (PE) requirements. In ARMCMC, we propose a variable jump distribution based on a temporal forgetting factor (TFF). In many dynamical systems, the TFF provides an adaptive alternative to a constant forgetting-factor hyperparameter. By offering a trade-off between exploitation and exploration, the variable jump distribution is tailored to hybrid/multi-modal systems, permitting inference across modes; this trade-off is adjusted based on the rate of parameter evolution. We demonstrate that ARMCMC requires fewer samples than conventional MCMC methods to achieve the same precision and reliability. We evaluate our approach on parameter estimation for a soft bending actuator and the Hunt-Crossley dynamic model, two challenging hybrid/multi-modal benchmarks. Compared with recursive least squares and the particle filter, our technique yields significantly more accurate point estimates and a lower tracking error for the quantity of interest.
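The core mechanism described above, a jump distribution that mixes local exploration with draws reflecting the previous posterior, can be illustrated with a toy Metropolis-Hastings sampler. This is a simplified reconstruction, not the authors' algorithm: the model, the mixing rule, and the use of a fixed weight in place of the adaptive TFF are all illustrative, and the sketch omits the proposal-density correction a rigorous independence proposal would require.

```python
import math
import random

def log_post(theta, data):
    # Gaussian likelihood with flat prior for the toy model y = theta * x
    return -0.5 * sum((y - theta * x) ** 2 for x, y in data)

def mh_variable_jump(data, prev_samples, tff, n=2000, step=0.2, seed=0):
    """Metropolis-Hastings with a two-component jump distribution:
    with probability `tff` (standing in for the temporal forgetting
    factor) propose a draw from the previous time step's posterior
    samples (exploitation), otherwise a local Gaussian step
    (exploration). Illustrative sketch only."""
    rng = random.Random(seed)
    theta = prev_samples[-1]          # warm-start from the old posterior
    chain = []
    for _ in range(n):
        if rng.random() < tff:
            prop = rng.choice(prev_samples)    # reuse old posterior mass
        else:
            prop = theta + rng.gauss(0.0, step)  # local random walk
        # standard Metropolis accept/reject on the log-posterior ratio
        if math.log(rng.random() + 1e-300) < log_post(prop, data) - log_post(theta, data):
            theta = prop
        chain.append(theta)
    return chain
```

With noiseless data generated at theta = 2 and stale previous samples near 0, the chain quickly abandons the outdated proposals (they are rejected) and concentrates around the new parameter value, which is the behavior the forgetting factor is meant to balance.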
Learned Point Cloud Compression for Classification
Deep learning is increasingly being used to perform machine vision tasks such as classification, object detection, and segmentation on 3D point cloud data. However, deep learning inference is computationally expensive. The limited computational capabilities of end devices thus necessitate a codec for transmitting point cloud data over the network for server-side processing. Such a codec must be lightweight and capable of achieving high compression ratios without sacrificing accuracy. Motivated by this, we present a novel point cloud codec that is highly specialized for the machine task of classification. Our codec, based on PointNet, achieves a significantly better rate-accuracy trade-off in comparison to alternative methods. In particular, it achieves a 94% reduction in BD-bitrate over non-specialized codecs on the ModelNet40 dataset. For low-resource end devices, we also propose two lightweight configurations of our encoder that achieve similar BD-bitrate reductions of 93% and 92% with 3% and 5% drops in top-1 accuracy, while consuming only 0.470 and 0.048 encoder-side kMACs/point, respectively. Our codec demonstrates the potential of specialized codecs for machine analysis of point clouds, and provides a basis for extension to more complex tasks and datasets in the future.
The Bayan Algorithm: Detecting Communities in Networks Through Exact and Approximate Optimization of Modularity
Aref, Samin, Chheda, Hriday, Mostajabdaveh, Mahdi
Community detection is a classic problem in network science with extensive applications in various fields. Among numerous approaches, the most common method is modularity maximization. Despite their wide adoption, heuristic modularity maximization algorithms rarely return an optimal partition or anything close to one. We propose a specialized algorithm, Bayan, which returns partitions with a guarantee of either optimality or proximity to an optimal partition. At the core of the Bayan algorithm is a branch-and-cut scheme that solves an integer programming formulation of the modularity maximization problem to optimality or approximates it within a factor. We compare Bayan against 30 alternative community detection methods on structurally diverse synthetic and real networks. Our results demonstrate Bayan's distinctive accuracy and stability in retrieving ground-truth communities of standard benchmark graphs. Bayan is several times faster than open-source and commercial solvers for modularity maximization, making it capable of finding optimal partitions for instances that cannot be optimized by any other existing method. Overall, our assessments point to Bayan as a suitable choice for exact maximization of modularity in real networks with up to 3000 edges (in their largest connected component) and for approximating maximum modularity in larger instances on ordinary computers. A Python implementation of the Bayan algorithm (the bayanpy library) is publicly available through the package installer for Python (pip).
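The objective Bayan optimizes exactly is the standard Newman-Girvan modularity, Q = (1/2m) Σ_ij [A_ij − k_i k_j / 2m] δ(c_i, c_j). A minimal reference implementation of the metric itself for a simple undirected graph (this is not code from bayanpy):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Newman-Girvan modularity Q of a partition.
    `edges` is a list of undirected (u, v) pairs;
    `communities` maps each node to its community label."""
    m = len(edges)
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # observed fraction of edges that fall inside communities
    q = sum(1.0 for u, v in edges if communities[u] == communities[v]) / m
    # minus the expected fraction under the configuration (null) model
    ktot = defaultdict(int)
    for node, k in deg.items():
        ktot[communities[node]] += k
    q -= sum((k / (2.0 * m)) ** 2 for k in ktot.values())
    return q
```

For two triangles joined by a single bridge edge, the natural two-community partition scores Q = 6/7 - 1/2, roughly 0.357; maximizing this quantity over all partitions is the NP-hard problem Bayan attacks with branch-and-cut.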
Hierarchical Planning and Policy Shaping Shared Autonomy for Articulated Robots
Yousefi, Ehsan, Chen, Mo, Sharf, Inna
In this work, we propose a novel shared autonomy framework for operating articulated robots. We provide strategies to design both task-oriented hierarchical planning and policy shaping algorithms for efficient human-robot interaction in context-aware operation of articulated robots. Our framework for the interplay between the human and the autonomy, as the participating agents in the system, is particularly influenced by ideas from multi-agent systems, game theory, and theory of mind, enabling a sliding level of autonomy. We formulate the sequential hierarchical human-in-the-loop decision-making process by extending MDPs and the Options framework to shared autonomy, and use deep RL techniques to train an uncertainty-aware shared autonomy policy. To fine-tune the formulation to a human, we use the history of system states, human actions, and their error with respect to a surrogate optimal model to encode the human's internal state embeddings, beyond the designed values, using conditional VAEs. We showcase the effectiveness of our formulation for different human skill levels and degrees of cooperativeness through a case study of a feller-buncher machine in the challenging tasks of timber harvesting. Our framework successfully provides a sliding level of autonomy from fully autonomous to fully manual, and is particularly successful in handling a noisy, non-cooperative human agent in the loop. The proposed framework advances the state of the art in shared autonomy for operating articulated robots, and can also be applied to other domains where autonomous operation is the ultimate goal.
A Survey on Cross-Architectural IoT Malware Threat Hunting
Raju, Anandharaju Durai, Abualhaol, Ibrahim, Giagone, Ronnie Salvador, Zhou, Yang, Huang, Shengqiang
In recent years, the increase in non-Windows malware threats has drawn the focus of the cybersecurity community. Research on hunting Windows PE-based malware is maturing, whereas developments in Linux malware threat hunting are relatively scarce. With the advent of the Internet of Things (IoT) era, smart devices integrated into human life have become a highway for hackers' malicious activities. IoT devices employ various Unix-based architectures that follow ELF (Executable and Linkable Format) as their standard binary file specification. This study aims to provide a comprehensive survey of the latest developments in cross-architectural IoT malware detection and classification approaches. Aided by a modern taxonomy, we discuss the feature representations, feature extraction techniques, and machine learning models employed in the surveyed works. We further provide insights into the practical challenges involved in cross-architectural IoT malware threat hunting and discuss various avenues for potential future research.
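A concrete starting point for the cross-architectural analysis the survey covers is identifying a sample's target ISA, which the ELF header exposes in its e_machine field (the constants below are a small subset of those defined in the ELF specification; the function name is illustrative):

```python
import struct

# e_machine values for architectures common in IoT firmware
# (subset of the values defined by the ELF specification)
E_MACHINE = {
    0x02: "SPARC",
    0x03: "x86",
    0x08: "MIPS",
    0x14: "PowerPC",
    0x28: "ARM",
    0x3E: "x86-64",
    0xB7: "AArch64",
}

def elf_arch(header):
    """Return the target architecture of an ELF binary given at least
    the first 20 bytes of the file."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # EI_DATA (byte 5): 1 = little-endian, 2 = big-endian
    endian = "<" if header[5] == 1 else ">"
    # e_machine is the 16-bit field at offset 18, after e_ident and e_type
    (machine,) = struct.unpack_from(endian + "H", header, 18)
    return E_MACHINE.get(machine, hex(machine))
```

Malware families are routinely cross-compiled for several of these targets from one codebase, which is precisely why architecture-agnostic features are a recurring theme in the surveyed detection approaches.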
Machine Learning Engineer at Huawei Technologies Canada Co., Ltd. - Burnaby, BC, Canada
With 194,000 employees and operating in more than 170 countries and regions, Huawei is a leading global creator and provider of information and communications technology (ICT) infrastructure and smart devices. Integrated solutions span across four key domains – telecom networks, IT, smart devices, and cloud services. Huawei is committed to bringing digital to every person, home and organization for a fully connected, intelligent world. Huawei Canada focuses on fundamental research and development aimed at solving complex technical problems in emerging technologies like 5G, AI, Human Computer Interaction and Autonomous Driving. With ongoing research initiatives with 10 Universities across Canada and strategic collaboration agreements with several Universities, we support Canada's rich research community.
Data Scientist at Huawei Technologies Canada Co., Ltd. - Burnaby, BC, Canada
Mining Minority-class Examples With Uncertainty Estimates
Singh, Gursimran, Chu, Lingyang, Wang, Lanjun, Pei, Jian, Tian, Qi, Zhang, Yong
In the real world, the frequency of occurrence of objects is naturally skewed, forming long-tail class distributions, which results in poor performance on the statistically rare classes. A promising solution is to mine tail-class examples to balance the training dataset. However, mining tail-class examples is a very challenging task. For instance, most otherwise successful uncertainty-based mining approaches struggle due to the distortion of class probabilities resulting from skewness in the data. In this work, we propose an effective, yet simple, approach to overcome these challenges. Our framework enhances the subdued tail-class activations and thereafter uses a one-class, data-centric approach to effectively identify tail-class examples. We carry out an exhaustive evaluation of our framework on three datasets spanning two computer vision tasks. Substantial improvements in minority-class mining and in the fine-tuned model's performance strongly corroborate the value of our proposed solution.
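The two-step idea above, amplify subdued tail-class activations and then score uncertainty, can be sketched as follows. Everything here is an illustrative stand-in for the paper's method: the multiplicative boost, the entropy score, and the function names are assumptions, not the authors' activation-enhancement or one-class mining procedure.

```python
import math

def boosted_entropy(probs, tail, boost):
    """Predictive entropy after amplifying tail-class probabilities
    (illustrative stand-in for the paper's activation enhancement)."""
    p = [pi * (boost if i in tail else 1.0) for i, pi in enumerate(probs)]
    z = sum(p)
    p = [pi / z for pi in p]  # renormalize
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def mine_candidates(batch, tail, boost=3.0, k=2):
    """Return the k examples with the highest boosted entropy, i.e. the
    most promising tail-class candidates. `batch` holds
    (example_id, class_probabilities) pairs."""
    scored = sorted(batch, key=lambda ex: -boosted_entropy(ex[1], tail, boost))
    return [ex[0] for ex in scored[:k]]
```

Without the boost, an example whose tail-class probability is subdued by skew-distorted calibration looks confidently head-class and is never mined; the amplification restores enough mass to the tail class for the uncertainty score to flag it.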